This is exactly the same analysis as the others, but limited to the people in Experiment 2 whom we could fit well, to check whether our results are robust to that restriction. If the results change dramatically, that would indicate they are being shaped by the people we cannot fit anyway.

Let’s get rid of people

I’m going to arbitrarily decide to get rid of the people whose fit in the \((\beta,\gamma)\) case is worse than 0.01 in either condition. This removes 50 of the 239 people.
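The exclusion step can be sketched as below. The data frame, column names, and toy values here are hypothetical (the real fit scores come from the model fits, and I assume higher = better fit); only the filtering logic is the point.

```r
# Sketch of the exclusion step, assuming a data frame `fits` with one row per
# participant and per-condition fit scores (higher = better). These names and
# values are toy illustrations, not the actual analysis objects.
fits <- data.frame(
  id          = 1:5,
  fit.peaked  = c(0.50, 0.005, 0.30, 0.20, 0.40),
  fit.uniform = c(0.60, 0.40,  0.008, 0.25, 0.45)
)

# Keep only participants whose fit exceeds the 0.01 cutoff in BOTH conditions
keep      <- fits$fit.peaked > 0.01 & fits$fit.uniform > 0.01
fits.kept <- fits[keep, ]

nrow(fits.kept)  # 3 of the 5 toy participants survive
```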

Basic performance

First let’s have a look at the peaked and uniform conditions, which had different priors but in both of which participants saw four red chips and one blue chip.

FITS

Looking at fitted values: \(\beta\), \(\delta\), and \(\gamma\)

Let’s have a look at the distribution of parameters. As a reminder:

\(\beta\): determines to what degree participants use their stated prior, and to what degree they use the uniform distribution, to deduce their posterior. If \(\beta=1\), they rely entirely on the stated prior; if \(\beta=0\), entirely on the uniform distribution.

\(\gamma\): how much they weighted the red chips they saw. If \(\gamma=1\), weighting is veridical; lower values indicate underweighting and higher values overweighting.

\(\delta\): how much they weighted the blue chips they saw. If \(\delta=1\), weighting is veridical; lower values indicate underweighting and higher values overweighting.

Note that there is no \(\beta\) parameter in the uniform condition, since the given prior is already uniform; the three-parameter fits for that condition therefore do not include it.
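The parameters above can be sketched as a small R function. This is a minimal sketch of one plausible parameterization, assuming a linear mixture for the prior and power-weighted chip counts for the likelihood; the function and variable names are hypothetical, and the actual fitted model may use a different functional form.

```r
# Minimal sketch of one plausible parameterization (hypothetical; the fitted
# model may differ). Over a grid of hypotheses `theta` (the probability that
# a drawn chip is red):
#   effective prior = beta * stated prior + (1 - beta) * uniform
#   likelihood      = theta^(gamma * n.red) * (1 - theta)^(delta * n.blue)
model.posterior <- function(theta, prior, beta, gamma, delta, n.red, n.blue) {
  uniform   <- rep(1 / length(theta), length(theta))
  eff.prior <- beta * prior + (1 - beta) * uniform
  lik       <- theta^(gamma * n.red) * (1 - theta)^(delta * n.blue)
  post      <- eff.prior * lik
  post / sum(post)                 # normalize so the posterior sums to 1
}

# Toy example: five hypotheses, a peaked stated prior, four red and one blue.
# With beta = gamma = delta = 1 this reduces to exact Bayes on the given prior.
theta <- c(0.1, 0.3, 0.5, 0.7, 0.9)
prior <- c(0.05, 0.10, 0.20, 0.40, 0.25)
post  <- model.posterior(theta, prior, beta = 1, gamma = 1, delta = 1,
                         n.red = 4, n.blue = 1)
```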

And also a 3D plot

Finally, let’s look at histograms of all of the variables.


Looking at fitted values: \(\beta\) alone

I wanted to see if the \(\beta\) values would be different if we weren’t also fitting \(\delta\) and \(\gamma\). So let’s look at that histogram.

For the priors: 19.4% had \(\beta\) less than 0.1, and 31.6% had \(\beta\) greater than 0.9.
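Percentages like these can be computed directly from the fitted values; a sketch with toy numbers (the vector name and values here are hypothetical, not the real fits):

```r
# How the summary percentages can be computed, assuming a vector `beta.fit`
# of fitted beta values (toy values for illustration, not the real fits)
beta.fit <- c(0.02, 0.05, 0.5, 0.95, 0.99, 0.3, 0.85, 0.08, 0.97, 0.6)

pct.low  <- 100 * mean(beta.fit < 0.1)   # percent with beta below 0.1
pct.high <- 100 * mean(beta.fit > 0.9)   # percent with beta above 0.9
```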

Looking at fitted values: \(\gamma\) alone

I wanted to see if the \(\gamma\) values would be different if we weren’t also fitting \(\beta\) and \(\delta\). So let’s look at that histogram.

For the likelihoods: 68.4% in PEAKED had \(\gamma\) less than 1 (no need to run UNIFORM, as that would be the same as the \(\gamma\) distribution below).

Looking at fitted values: \(\beta\) and \(\gamma\) alone

Here we assume one parameter (call it \(\gamma\)) instead of two separate ones (\(\gamma\) and \(\delta\)); that is, we force them to be the same, so the single parameter can be thought of as a conservatism parameter. I wanted to look at this because it may be considerably more interpretable than having both.

\(\beta\): determines to what degree participants use their stated prior, and to what degree they use the uniform distribution, to deduce their posterior. If \(\beta=1\), they rely entirely on the stated prior; if \(\beta=0\), entirely on the uniform distribution.

\(\gamma\): how much they weighted the chips they saw. If \(\gamma=1\), weighting is veridical; lower values indicate underweighting and higher values overweighting.
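The constrained model can be sketched as follows, assuming (hypothetically) a linear prior mixture and power-weighted chip counts; a single \(\gamma\) now scales both red and blue counts, so \(\gamma<1\) flattens the likelihood, which is exactly the conservatism reading.

```r
# Sketch of the constrained model: one gamma weights both red and blue chips
# (hypothetical parameterization; the actual fitted model may differ)
constrained.posterior <- function(theta, prior, beta, gamma, n.red, n.blue) {
  uniform   <- rep(1 / length(theta), length(theta))
  eff.prior <- beta * prior + (1 - beta) * uniform
  lik       <- theta^(gamma * n.red) * (1 - theta)^(gamma * n.blue)
  post      <- eff.prior * lik
  post / sum(post)
}

# Toy demo: with a uniform prior, gamma = 0.5 flattens the likelihood, so the
# posterior is less peaked than the veridical gamma = 1 case (conservatism)
theta <- c(0.1, 0.3, 0.5, 0.7, 0.9)
prior <- rep(0.2, 5)
full  <- constrained.posterior(theta, prior, beta = 0, gamma = 1,
                               n.red = 4, n.blue = 1)
half  <- constrained.posterior(theta, prior, beta = 0, gamma = 0.5,
                               n.red = 4, n.blue = 1)
```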

Let’s calculate the Spearman correlation:

cor.test(de2p_fittwo$beta,de2p_fittwo$gamma,method="spearman")
## Warning in cor.test.default(de2p_fittwo$beta, de2p_fittwo$gamma, method =
## "spearman"): Cannot compute exact p-value with ties
## 
##  Spearman's rank correlation rho
## 
## data:  de2p_fittwo$beta and de2p_fittwo$gamma
## S = 161580, p-value = 0.7681
## alternative hypothesis: true rho is not equal to 0
## sample estimates:
##         rho 
## -0.03016165

And a histogram of them too


For the priors: 24.5% had \(\beta\) less than 0.1, and 4.1% had \(\beta\) greater than 0.9.

For the likelihoods: 34.7% in PEAKED and 54.9% in UNIFORM had \(\gamma\) less than 1 (i.e., were conservative).
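Per-condition percentages like these drop out of a grouped mean; a sketch with a toy data frame (names and values hypothetical, not the real fits):

```r
# Percent of participants with gamma < 1 in each condition, assuming a data
# frame with a condition label and fitted gamma values (toy data only)
d <- data.frame(
  condition = rep(c("PEAKED", "UNIFORM"), each = 4),
  gamma     = c(0.8, 1.2, 0.9, 1.5,   0.7, 0.6, 1.1, 0.95)
)

# tapply averages the logical (gamma < 1) within each condition
pct.conservative <- 100 * tapply(d$gamma < 1, d$condition, mean)
```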

Individuals: Peaked prior

So now let’s look at individuals’ posteriors, based on their best-fit \(\beta\) and \(\gamma\).

red line: the given prior

dark green line: their reported posterior

dotted grey: Bayes rule prediction assuming uniform prior (not shown)

solid grey: Bayes rule prediction assuming the given prior

dotted black: prediction based on the best-fit \(\beta\) and \(\gamma\)

Individuals: Uniform prior

So now let’s look at individuals’ posteriors, based on their best-fit \(\beta\) and \(\gamma\).

red line: the given prior

light green line: their reported posterior

solid grey: Bayes rule prediction assuming uniform prior

dotted black: prediction based on the best-fit \(\beta\) and \(\gamma\)

We can also get a sense of how good the fits were.


Aggregate fits

We can also look at the parameter values for the aggregate fits.